40 research outputs found

    The influence of dynamics and speech on understanding humanoid facial expressions

    Human communication relies largely on nonverbal signals expressed through body language. Facial expressions, in particular, convey emotional information that allows people involved in social interactions to judge each other's emotional states and to adjust their behavior appropriately. The first studies investigating the recognition of facial expressions were based on static stimuli. However, facial expressions are rarely static, especially in everyday social interactions. It has therefore been hypothesized that the dynamics inherent in a facial expression could be fundamental to understanding its meaning. In addition, it has been demonstrated that nonlinguistic and linguistic information can reinforce the meaning of a facial expression, making it easier to recognize. Nevertheless, few such studies have been performed on realistic humanoid robots. This experimental work aimed at demonstrating the human-like expressive capability of a humanoid robot by examining whether motion and vocal content influenced the perception of its facial expressions. The first part of the experiment studied the recognition of two kinds of stimuli related to the six basic expressions (i.e., anger, disgust, fear, happiness, sadness, and surprise): static stimuli, that is, photographs, and dynamic stimuli, that is, video recordings. The second and third parts compared the same six basic expressions performed by a virtual avatar and by a physical robot under three different conditions: (1) muted facial expressions, (2) facial expressions with nonlinguistic vocalizations, and (3) facial expressions with an emotionally neutral verbal sentence. The results show that static stimuli performed by a human being and by the robot were more ambiguous than the corresponding dynamic stimuli in which motion and vocalization were combined. This hypothesis was also investigated with a three-dimensional replica of the physical robot, demonstrating that even in the case of a virtual avatar, dynamics and vocalization improve the capability to convey emotions.

    Can a Humanoid Face be Expressive? A Psychophysiological Investigation

    Non-verbal signals expressed through body language play a crucial role in multimodal human communication during social relations. Indeed, in all cultures, facial expressions are the most universal and direct signs used to express innate emotional cues. A human face conveys important information in social interactions and helps us to better understand our social partners and establish empathic links. Recent research shows that humanoid and social robots are becoming increasingly similar to humans, both esthetically and expressively. However, their visual expressiveness is a crucial issue that must be improved to make these robots more realistic and intuitively perceivable by humans as not different from themselves. This study concerns the capability of a humanoid robot to exhibit emotions through facial expressions. More specifically, emotional signs performed by a humanoid robot have been compared with corresponding human facial expressions in terms of recognition rate and response time. The set of stimuli included standardized human expressions taken from an Ekman-based database and the same facial expressions performed by the robot. Furthermore, participants’ psychophysiological responses have been explored to investigate whether there could be differences induced by interpreting robot or human emotional stimuli. Preliminary results show a trend toward better recognition of expressions performed by the robot than of those shown as 2D photos or 3D models. Moreover, no significant differences in the subjects’ psychophysiological state have been found during the discrimination of facial expressions performed by the robot in comparison with the same task performed with 2D photos and 3D models.

    Socioeconomic Patterning of Childhood Overweight Status in Europe

    There is growing evidence of social disparities in overweight among European children. This paper examines whether there is an association between socioeconomic inequality and the prevalence of child overweight in European countries, and whether socioeconomic disparities in child overweight are increasing. We analyse cross-country comparisons of household inequality and child overweight prevalence in Europe and review within-country variations over time of childhood overweight by social grouping, drawn from a review of the literature. Data from 22 European countries suggest that greater inequality in household income is positively associated with both self-reported and measured child overweight prevalence. Moreover, seven studies from four countries reported on the influence of socioeconomic factors on the distribution of child overweight over time. Four of the seven reported widening social disparities in childhood overweight, a fifth found statistically significant disparities only in a small sub-group, one found non-statistically significant disparities, and a lack of social gradient was reported in the last study. Where there is evidence of a widening social gradient in child overweight, it is likely that the changes in lifestyles and dietary habits involved in the increase in the prevalence of overweight have had a less favourable impact on low socioeconomic status groups than on the rest of the population. More profound structural changes, based on population-wide social and environmental interventions, are needed to halt the increasing social gradient in child overweight in current and future generations.

    Notulae to the Italian alien vascular flora: 12

    In this contribution, new data concerning the distribution of vascular flora alien to Italy are presented. The contribution includes new records, confirmations, exclusions, and status changes for Italy or for Italian administrative regions. Nomenclatural and distribution updates published elsewhere are provided as Suppl. material 1.

    Barbarea vulgaris Glucosinolate Phenotypes Differentially Affect Performance and Preference of Two Different Species of Lepidopteran Herbivores

    The composition of secondary metabolites and the nutritional value of a plant both determine herbivore preference and performance. The genetically determined glucosinolate pattern of Barbarea vulgaris can be dominated either by glucobarbarin (BAR-type) or by gluconasturtiin (NAS-type). Because of their structural differences, these glucosinolates may have different effects on herbivores. We compared the two Barbarea chemotypes with regard to the preference and performance of two lepidopteran herbivores, the generalist Mamestra brassicae and the specialist Pieris rapae. Neither herbivore preferred one chemotype over the other for oviposition. However, larvae of the generalist M. brassicae preferred to feed, and performed best, on NAS-type plants. On NAS-type plants, 100% of the M. brassicae larvae survived while growing exponentially, whereas on BAR-type plants, M. brassicae larvae showed little growth and a mortality of 37.5%. In contrast to M. brassicae, the larval preference and performance of the specialist P. rapae were unaffected by plant chemotype. Total levels of glucosinolates, water-soluble sugars, and amino acids of B. vulgaris could not explain the poor preference and performance of M. brassicae on BAR-type plants. Our results suggest that the difference in glucosinolate chemical structure is responsible for the differential effects of the B. vulgaris chemotypes on the generalist herbivore.

    Development of a cognitive and emotional control system for a social humanoid robot

    In recent years, an increasing number of social robots have come out of science fiction novels and movies and become reality. These social robots are interesting not only in the world of science fiction but also in scientific research. Building socially intelligent robots in a human-centred manner can help us to better understand ourselves and the psychological and behavioural dynamics behind a social interaction. The primary and most important function of a social robot is to appear “believable” to human observers and interaction partners. This means that a social robot must be able to express its own state and perceive the state of its social environment in a human-like way in order to act successfully, i.e., it must possess a “social intelligence” for maintaining the illusion of dealing with a real human being. The term “social intelligence” includes aspects both of appearance and of behaviour, factors that are tightly coupled with each other. For example, a social robot designed to be aesthetically similar to an animal is expected to have limited functionalities. In contrast, a humanoid robot that physically resembles a human being elicits strong expectations about its behavioural and cognitive capabilities, and if such expectations are not met, a person is likely to experience disorientation and disappointment. The believability of a social robot is not only an objective matter; it also depends on a subjective evaluation by the person involved in the interaction. A social robot will be judged believable or not on the basis of the individual experience and background of the person who interacts with it. Clearly, it is not possible to know what is really going on in the mind of that person during the interaction. Nevertheless, it is possible to analyse and evaluate the psychophysiological and behavioural reactions of the subject to obtain useful cues for improving the quality and performance of the social interaction. Based on these considerations, this thesis aims to answer two research questions: (1) How can a robot be believable and behave in a socially acceptable manner? (2) How can the social interaction of the subject with the robot be evaluated? This thesis presents, on one hand, the development of a novel software architecture for controlling a humanoid robot able to reproduce realistic facial expressions and, on the other hand, the development of a software platform for analysing human-robot interaction studies from the point of view of the subject who interacts with the robot. The architecture developed for controlling the robot is based on a hybrid deliberative/reactive paradigm that makes the robot able to react quickly to events, i.e., reactive behaviours, but also to perform more complex high-level tasks that require reasoning, i.e., deliberative behaviours. The integration of a deliberative system based on a rule-based expert system with the reactive system makes the robot controllable through a declarative language that is closer to the human natural way of thinking. An interactive graphical interface provides the user with a tool for controlling the behaviour of the robot. Thus, the robot becomes a research tool suitable for investigating its “being social and believable” and for testing social behavioural models defined by sets of rules. The hybrid architecture for controlling the robot has proven to be a good design for making the robot able to perform complex animations and convey emotional stimuli.
    The robot can perceive and interpret social cues in the environment, react emotionally to people in its surroundings, and follow the person who attracted its attention. The platform developed for studying the subject’s psychophysiological and behavioural reactions during the interaction with a robot is designed to be modular and configurable. Depending on the experiment specifications, multiple heterogeneous sensors with different hardware and software characteristics can be integrated into the platform. Collecting and fusing complementary and redundant subject-related information makes it possible to obtain an enriched scene interpretation. Indeed, merging different types of data can highlight important information that may otherwise remain hidden if each type of data is analysed separately. The multimodal data acquisition platform was used in the context of a research project aimed at evaluating the interaction of typically developing and autistic children with social robots. The results demonstrated the reliability and effectiveness of the platform in storing different types of data synchronously. In multimodal data fusion systems, the problem of keeping temporal coherence between data coming from different sensors is fundamental. The availability of synchronized heterogeneous data acquired by the platform, such as self-report annotations, physiological measures and behavioural observations, facilitated the analysis and evaluation of the interaction of the subjects with the robot.
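    As an illustration of the hybrid deliberative/reactive paradigm described above, the following minimal sketch shows how reactive behaviours and a rule-based deliberative layer could be combined in a single control loop. It is written in Python; all names (Rule, HybridController, the example percepts) are hypothetical and do not come from the thesis code base.

```python
# Minimal sketch of a hybrid deliberative/reactive control loop.
# All names here are illustrative, not taken from the thesis software.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """A declarative rule: IF condition(percepts) THEN action()."""
    condition: Callable[[dict], bool]
    action: Callable[[], None]

class HybridController:
    def __init__(self) -> None:
        self.reflexes: list[Rule] = []  # reactive layer (fast, stateless)
        self.rules: list[Rule] = []     # deliberative layer (expert system)

    def step(self, percepts: dict) -> None:
        # Reactive behaviours fire first, so the robot responds
        # immediately to events (e.g. a face entering the field of view).
        for reflex in self.reflexes:
            if reflex.condition(percepts):
                reflex.action()
        # The deliberative layer then evaluates its rule base and selects
        # at most one higher-level behaviour that requires reasoning.
        for rule in self.rules:
            if rule.condition(percepts):
                rule.action()
                break

# Example: orient toward a detected face, then follow the attended person.
controller = HybridController()
controller.reflexes.append(Rule(
    condition=lambda p: p.get("face_detected", False),
    action=lambda: print("orient gaze toward face"),
))
controller.rules.append(Rule(
    condition=lambda p: p.get("attention_target") is not None,
    action=lambda: print("engage and follow target"),
))
controller.step({"face_detected": True, "attention_target": "person_1"})
```

    The design point illustrated here is that reflexes run on every cycle, while the rule base selects at most one deliberative behaviour per cycle; this is one simple way to keep fast reactions from being blocked by slower reasoning.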

    FACE: A software infrastructure for controlling a robotic face for autism therapy

    People with autism are known to have deficits in processing mental states, and they may have unusual ways of learning, paying attention, and reacting to different sensations. FACE (Facial Automaton for Conveying Emotions) is a humanoid robot capable of expressing and conveying emotions, intended to enable autistic people to better deal with emotional and expressive information. To this purpose, this thesis concerns the development of a software framework for controlling the servo motors actuating FACE, which are responsible for defining the facial expressions of the android. A set of tools has been developed to enable psychologists to manage the robot during therapy sessions and to analyze acquired data, such as physiological signals and recorded video, after the therapies. Moreover, a 3D simulator of FACE is being developed to allow therapists to prepare expressions and facial behaviors off-line, in a way compatible with the capabilities of the real robot. The algorithm used to modify the 3D facial mesh is based on a physical model, allowing more realistic expressions.
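    As a rough illustration of what a servo-control layer for facial expressions might look like, the sketch below maps a named expression onto per-servo pulse widths. The expression table, servo names, and calibration ranges are invented for the example and do not reflect the actual FACE hardware or framework.

```python
# Illustrative mapping from a named expression to servo pulse widths.
# Servo names, channels and ranges are hypothetical, not FACE's real setup.
EXPRESSIONS = {
    # expression -> {servo_name: normalized position in [0.0, 1.0]}
    "happiness": {"mouth_corner_l": 0.9, "mouth_corner_r": 0.9, "brow_inner": 0.4},
    "sadness":   {"mouth_corner_l": 0.1, "mouth_corner_r": 0.1, "brow_inner": 0.8},
}

SERVO_RANGE = {  # per-servo calibration: (min_pulse_us, max_pulse_us)
    "mouth_corner_l": (600, 2400),
    "mouth_corner_r": (600, 2400),
    "brow_inner": (800, 2200),
}

def to_pulse(servo: str, position: float) -> int:
    """Convert a normalized position to a pulse width in microseconds."""
    lo, hi = SERVO_RANGE[servo]
    clamped = max(0.0, min(1.0, position))  # guard against out-of-range input
    return int(lo + clamped * (hi - lo))

def set_expression(name: str) -> dict[str, int]:
    """Return the pulse widths to send to the servo controller."""
    return {servo: to_pulse(servo, pos) for servo, pos in EXPRESSIONS[name].items()}

print(set_expression("happiness"))
```

    A table of normalized positions per expression, with a separate per-servo calibration step, is one plausible way such a framework could let therapists author expressions off-line and replay them on hardware with different motor ranges.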

    A dialogue with a virtual imaginary interlocutor as a form of a psychological support for well-being

    Computers, tablets and smartphones are tools that increasingly accompany us in everyday activities. Given the booming use of virtual reality and the wide range of people who have access to it, people are increasingly presented with an online alternative to the support of professionals, therapeutic groups organized by healthcare institutions, or significant others (such as family, friends and colleagues). This can be used as a tool for personal development and for coping with stress. Our research program includes creating a virtual reality application to sustain well-being and improve quality of life. It assumes that avatars, representations of a person in cyberspace, will provide support in the form of a virtual conversation. Dialogue with an imaginary person is as supportive a technique in a stressful situation as creating a list of solutions, and over the long term it can offer a specific way to reach the desired change.

    Development and Testing of a Multimodal Acquisition Platform for Human-Robot Interaction Affective Studies

    Human-Robot Interaction (HRI) studies have recently received increasing attention in various fields, from academic communities to engineering firms and the media. Many researchers have been focusing on the development of tools to evaluate the performance of robotic systems and studying how to extend the range of robot interaction modalities and contexts. Because people are emotionally engaged when interacting with computers and robots, researchers have been focusing attention on the study of affective human-robot interaction. This new field of study requires the integration of various approaches typical of different research backgrounds, such as psychology and engineering, to gain more insight into human-robot affective interaction. In this paper, we report the development of a multimodal acquisition platform called HIPOP (Human Interaction Pervasive Observation Platform). HIPOP is a modular data-gathering platform based on various hardware and software units that can be easily used to create a custom acquisition setup for HRI studies. The platform uses modules for physiological signals, eye gaze, video and audio acquisition to perform an integrated affective and behavioral analysis. It is also possible to include new hardware devices in the platform. The open-source hardware and software revolution has made many high-quality commercial and open-source products freely available for HRI and HCI research. These devices are currently most often used for data acquisition and robot control, and they can be easily included in HIPOP. Technical tests demonstrated HIPOP's ability to reliably acquire large sets of data, with robust failure management and data synchronization. The platform was able to automatically recover from errors and faults without affecting the entire system, and the misalignment observed in the acquired data was not significant and did not affect the multimodal analysis. HIPOP was also tested in the context of the FACET (FACE Therapy) project, in which a humanoid robot called FACE (Facial Automaton for Conveying Emotions) was used to convey affective stimuli to children with autism. In the FACET project, psychologists without technical skills were able to use HIPOP to collect the data needed for their experiments without dealing with hardware issues, data integration challenges, or synchronization problems. The FACET case study highlighted the real core feature of the HIPOP platform (i.e., multimodal data integration and fusion). This analytical approach allowed psychologists to study both behavioral and psychophysiological reactions to obtain a more complete view of the subjects’ state during interaction with the robot. These results indicate that HIPOP could become an innovative tool for HRI affective studies aimed at inferring a more detailed view of a subject’s feelings and behavior during interaction with affective and empathic robots.
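    The core ideas reported for HIPOP, namely modular sensor units, a shared clock for synchronization, and fault isolation so that one failing module does not stop the acquisition, can be sketched as follows. This is a minimal illustration in Python; the module names and the queue-based design are assumptions for the example, not HIPOP's actual architecture.

```python
# Sketch of modular, synchronized multimodal acquisition with fault
# isolation. Sensor names and the queue-based design are hypothetical.
import queue
import threading
import time

class SensorModule(threading.Thread):
    """One acquisition unit; an error here never stops the other modules."""
    def __init__(self, name, read_sample, out_queue, period_s):
        super().__init__(daemon=True)
        self.name, self.read_sample = name, read_sample
        self.out_queue, self.period_s = out_queue, period_s
        self.running = True

    def run(self):
        while self.running:
            try:
                sample = self.read_sample()
                # A shared monotonic clock keeps streams temporally coherent.
                self.out_queue.put((time.monotonic(), self.name, sample))
            except Exception as err:
                # Fault isolation: log and continue; other modules unaffected.
                print(f"[{self.name}] recovered from error: {err}")
            time.sleep(self.period_s)

sink: queue.Queue = queue.Queue()
modules = [
    SensorModule("ecg", lambda: 0.42, sink, 0.01),         # physiological signal
    SensorModule("gaze", lambda: (0.5, 0.5), sink, 0.02),  # eye tracker
]
for m in modules:
    m.start()
time.sleep(0.05)  # acquire briefly, then drain the synchronized stream
while not sink.empty():
    ts, name, sample = sink.get()
    print(f"{ts:.3f} {name} {sample}")
```

    Timestamping every sample against one monotonic clock at the point of capture is a simple way to obtain the kind of synchronized heterogeneous streams that the multimodal fusion step described above relies on.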